Life 3.0 - Critical summary review - Max Tegmark
Technology & Innovation and Society & Politics

This microbook is a summary/original review based on the book: Life 3.0: Being Human in the Age of Artificial Intelligence

ISBN: 1101946598

Publisher: Vintage

Critical summary review

Is artificial intelligence the best or the worst thing that can happen to humanity? Well-known Swedish American physicist and AI researcher Max Tegmark investigates this question in his spellbinding 2017 book, “Life 3.0.” So, get ready to speculate with him about what lies ahead – and what may be at stake.

The three stages of life

Defining life is a notoriously difficult problem. For his book, however, Tegmark chooses the broadest definition possible and describes life as “a self-replicating information-processing system whose information (software, DNA) determines both its behavior and the blueprints for its hardware (cell, body).”
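
To make this definition concrete, here is a minimal toy sketch in Python – our illustration, not Tegmark’s – of a self-replicating information-processing system whose information determines both its behavior and its hardware blueprint. The class name and DNA-like alphabet are invented for the example:

```python
import random
from dataclasses import dataclass

# Toy "life-form": its information (software) doubles as the blueprint
# for its hardware, and replication occasionally introduces copying errors.
@dataclass
class LifeForm:
    software: str  # the information, e.g. DNA or code
    hardware: str  # the body built from that blueprint

    def replicate(self, mutation_rate: float = 0.01) -> "LifeForm":
        # Copy the information, with rare random errors (mutations).
        copied = "".join(
            random.choice("ACGT") if random.random() < mutation_rate else base
            for base in self.software
        )
        # The offspring's hardware is rebuilt from the copied blueprint.
        return LifeForm(software=copied, hardware=f"cell built from {copied[:8]}")

ancestor = LifeForm(software="ACGT" * 8, hardware="cell")
offspring = ancestor.replicate()
print(offspring.software)  # occasionally differs from the parent: raw material for evolution
```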

The only place in the vast universe where this kind of process is known to occur is our planet. And ever since life first appeared here some 4 billion years ago, it has been constantly evolving and growing more complex. To clarify his discussion of humans in the age of AI, Tegmark sets aside traditional biological taxonomy and classifies all possible life-forms into three broad groups, according to their level of sophistication:

  • Life 1.0. The biological stage of life, in which organisms can survive and replicate but cannot change within their lifetimes: almost everything they do – and can do – is predetermined by their DNA. True, even 1.0 life-forms change, but only through evolution, and only over many generations. In terms of Tegmark’s definition of life, organisms in the biological stage can design neither their hardware nor their software.
  • Life 2.0. The cultural stage, in which living organisms can not only survive and replicate but also redesign much of their software. The sole example of Life 2.0 is humans, since we routinely learn complex new skills such as languages and professions. Besides being smarter than Life 1.0 organisms, we are also much more flexible: whereas bacteria adapt to environmental changes only slowly, over many generations, humans can adapt almost instantly by updating their software. For example, soon after we realized that the coronavirus could be dangerous to our well-being, policies of social distancing were implemented.
  • Life 3.0. Even though humans are more or less free from most of their genetic shackles, we are neither immortal nor omniscient. Moreover, we depend on our tools to evolve fast. Despite being capable of “minor hardware upgrades” such as implanting pacemakers or artificial teeth, we can’t really redesign our hardware dramatically and become ten times taller or acquire brains a million times bigger. That would be the next step in evolution: the technological stage, or Life 3.0, which can design not only its software but also its hardware (see the sketch after this list). We have already created machines that, like us, update their own software – self-taught programs built on neural networks that learn to play chess, for example. This so-called narrow AI should serve merely as an introduction to artificial general intelligence (AGI) – the ability of machines “to accomplish any cognitive task at least as well as humans.” Superintelligence and 3.0 life-forms could follow shortly after.
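
To summarize the taxonomy, the hypothetical sketch below (the stage names are Tegmark’s; the capability flags are our own shorthand) records what each stage of life can redesign about itself:

```python
# Tegmark's three stages, expressed as what each can redesign about itself.
STAGES = {
    "Life 1.0 (biological)":    {"software": False, "hardware": False},
    "Life 2.0 (cultural)":      {"software": True,  "hardware": False},
    "Life 3.0 (technological)": {"software": True,  "hardware": True},
}

for stage, can_redesign in STAGES.items():
    software = "rewritable within a lifetime" if can_redesign["software"] else "fixed by DNA, changed only by evolution"
    hardware = "redesignable at will" if can_redesign["hardware"] else "fixed, minor upgrades at best"
    print(f"{stage}: software {software}; hardware {hardware}")
```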

The future of AI: myths, facts and schools of thought

The universe originated 13.8 billion years ago, and Life 1.0 first appeared a staggering 10 billion years later. It took Life 1.0 about 4 billion years to evolve into Life 2.0, and then, in no more than 300,000 years, humans arrived on the brink of fashioning future 3.0 life-forms. 
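
A quick back-of-the-envelope check of these figures, using the numbers exactly as given above, shows how compressed this history is:

```python
# Rough arithmetic behind the timeline above (figures as quoted in the text).
universe_age   = 13.8e9  # years since the Big Bang
life_1_started = 4.0e9   # years ago: Life 1.0 appears on Earth
life_2_started = 3.0e5   # years ago: humans (Life 2.0) arrive

print(f"Life 1.0 appeared ~{(universe_age - life_1_started) / 1e9:.1f} billion years after the Big Bang")
print(f"Life 2.0 accounts for only {life_2_started / life_1_started:.4%} of life's history so far")
```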

“On the brink,” however, is only true in cosmological terms and doesn’t mean much in any other sense. The arrival of Life 3.0 is neither inevitable nor impossible: it may happen in decades, in centuries, or never. The idea that we know when 3.0 life-forms will appear is one of the most common AI myths and misconceptions. Another myth is that only Luddites worry about AI – many top AI researchers are concerned as well. A third myth is that AI will never be able to control humans. Just as we control tigers solely because we’re smarter, AI should be able to control us once it becomes more intelligent than we are. Intelligence enables control.

Although there are many other unknowns, the conversation about 3.0 life-forms among world-leading experts essentially centers on two questions: when (if ever) it will happen, and what it would mean for humanity. Depending on their views, experts fall into one of three groups:

  • Digital utopians. These are the optimists. Beyond believing we will likely see 3.0 life-forms in this century, they wholeheartedly welcome this prospect, viewing it as “the natural and desirable next step in the cosmic evolution.” Google’s Larry Page and inventor Ray Kurzweil are among the best-known advocates of this view.
  • Techno-skeptics. The pessimists. For techno-skeptics, building superhuman AGI is next to impossible for the present generation of humans. They don’t see Life 3.0 happening any time soon and think it’s silly to worry about it now. In the words of one of its most prominent proponents, Coursera’s Andrew Ng, “fearing a rise of killer robots is like worrying about overpopulation on Mars.”
  • The beneficial-AI movement. A middle ground between the two camps above, the beneficial-AI movement believes that human-level AGI in this century is a real possibility but that a good outcome is by no means guaranteed. Consequently, it advocates AI-safety research before it’s too late. Stephen Hawking, Elon Musk and the author himself – to name just a few – are members of the beneficial-AI movement.

The near future: breakthroughs, bugs, laws, weapons and jobs

It’s important to note straight away that not even the most passionate techno-skeptics question the potential of AI to profoundly influence our lives in the very near future. Regardless of when – or even whether – AI reaches human level across all skills (AGI), if progress continues at the current rate, it won’t take long before narrow AI irreversibly changes how we live our lives and deal with some of our most pressing issues. We’re not merely talking about self-driving cars and surgical bots – we’re talking about more just and more equal societies as well.

For example, as advanced as our legal systems are relative to those of previous times, they are still fraught with innumerable problems. Though human intelligence is remarkably broad, humans are fallible and biased. On the other hand, as incipient as today’s AI is, it is immensely better than we are at solving well-delineated computational problems. Viewed abstractly, the legal system is nothing but a complex computational problem: the input is information about evidence and laws, and the output is an appropriate decision. Unlike humans, the AI-based “robojudges” of the future should be able to “tirelessly apply the same high legal standards to every judgment without succumbing to human errors such as bias, fatigue, or lack of the latest knowledge.”
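
To make the “legal system as computation” framing concrete, here is a deliberately oversimplified, purely hypothetical sketch – every element name and rule is invented – of a decision procedure that applies one fixed standard to every case. A real robojudge would, of course, raise exactly the verification and bias problems discussed below:

```python
# Purely hypothetical sketch: the legal system viewed as a computation
# from (evidence, laws) to a decision. All names and rules are invented.
def robojudge(evidence: dict[str, bool], laws: dict[str, bool]) -> str:
    # Which legally required elements does the evidence establish?
    required = [element for element, needed in laws.items() if needed]
    satisfied = sum(evidence.get(element, False) for element in required)
    # The same threshold for every case: no fatigue, no mood swings -
    # though bias can still hide in how the inputs were gathered.
    return "guilty" if satisfied == len(required) else "not guilty"

laws = {"act_proven": True, "intent_proven": True}
print(robojudge({"act_proven": True, "intent_proven": False}, laws))  # not guilty
```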

It is no longer even a question whether AI can make our electric power systems more efficient, and, in theory, the same should hold for something far more important: the economic order. AI is already replacing humans on the job market, and it may soon even be capable of distributing wealth more justly. Moreover, AI-powered drones and other autonomous weapon systems (AWS) could even make wars more humane: “if wars consist merely of machines fighting machines, then no human soldiers or civilians need to get killed.”

Unfortunately, all of this comes with caveats. For example, “our laws need rapid updating to keep up with AI, which poses tough legal questions involving privacy, liability, and regulation.” Likewise, if AI-created wealth doesn’t get redistributed, inequality will greatly increase in the low-employment society of the future. And in such a society, the author reminds us, few things could be worse than AWS “available to everybody with a full wallet and an ax to grind.” “When we allow real-world systems to be controlled by AI,” Tegmark warns, “it’s crucial that we learn to make AI more robust, doing what we want it to do. This boils down to solving tough technical problems related to verification, validation, security, and control.”

Intelligence explosion, and a few speculative timelines of the next 10,000 years

Unless regulated, AGI and subsequent 3.0 life-forms could take over the world. As far-fetched as this might sound, the three steps that separate us from such a future are both logical and consequential:

  • Step 1: Build human-level AGI.
  • Step 2: Use this AGI to create superintelligence.
  • Step 3: Use or unleash this superintelligence to take over the world.

Of the three steps, the first seems the most challenging. But once we build AGI – “the holy grail of AI research” – the resulting machine should be “capable enough to recursively design ever-better AGI” and cause an intelligence explosion (see the toy model after the list below), leading to the second step. However, there is no consensus on whether that will happen or on where it might leave us as humans. AI experts have postulated and debated several possible scenarios for the third step, which can be grouped into three broader categories:

  • Peaceful coexistence. Since it will be preprogrammed to be good, AGI will remain ostensibly friendly to humans even once it evolves into superintelligence. In the best-case scenarios, either humans and robots will cohabit a “libertarian utopia,” or humans will be ruled altruistically and efficiently by AGI systems, which will finally establish an “egalitarian paradise” under a “benevolent AGI dictator.” In the worst-case scenario of peaceful coexistence, AGI will treat us the same way we treat chimpanzees and, in essence, become our zookeeper.
  • Kill switch. Superintelligence may never happen because relevant research might be banned by “a global human-led Orwellian surveillance state” (the 1984 scenario). An AI-based gatekeeper may also prevent it – “a superintelligence with the goal of interfering as little as necessary to prevent the creation of another superintelligence.” Finally, it could be prevented by humans who grasp the threat at just the right moment and decide to revert to more primitive technologies to save themselves.
  • The end of humanity. Finally, humanity can, indeed, go extinct and get replaced by AIs in the self-explanatory conqueror and descendant scenarios. Amazingly, there’s an even worse scenario: humans may also devise AWS that will ultimately destroy both our societies and AI, leaving nothing behind.
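
The runaway dynamic behind step 2 is easy to simulate. In the toy model below – our illustration, not Tegmark’s, with an arbitrary 50% capability gain per generation – each AGI generation designs a slightly smarter successor, so capability compounds instead of growing linearly:

```python
# Toy model of recursive self-improvement: each generation designs a
# successor a fixed fraction smarter than itself, so capability compounds.
# The 50% per-generation gain is an arbitrary assumption for illustration.
level = 1.0  # 1.0 = human-level AGI, the outcome of step 1
for generation in range(1, 11):
    level *= 1.5  # a smarter designer produces a yet smarter design
    print(f"generation {generation:2d}: {level:6.1f}x human level")
```

After just ten generations the toy system is nearly 60 times human level; whether anything like this compounding would actually occur is precisely what the experts dispute.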

“The climax of our current race toward AI may be either the best or the worst thing ever to happen to humanity,” notes Tegmark. He also warns, hauntingly, that we must take into consideration all possible outcomes, and “start thinking hard about which outcome we prefer and how to steer in that direction. Because if we don’t know what we want, we’re unlikely to get it.”

Final Notes

Selected as one of Barack Obama’s favorite books of 2018, Max Tegmark’s “Life 3.0” is a riveting and thought-provoking book on “the most important conversation of our time.” It is also, in the words of Elon Musk, “a compelling guide to the challenges and choices in our quest for a great future of life, intelligence and consciousness – on Earth and beyond.” A must-read.

12min Tip

Though AGI is (at least) decades away from being a reality, AI is already transforming the world. Consequently, Tegmark’s – and our – career advice for today’s kids: “Go into professions that machines are bad at – those involving people, unpredictability, and creativity.”

Who wrote the book?

Max Tegmark is a Swedish American physicist, cosmologist, and AI scholar. He is a professor at the Massachusetts Institute of Technology and the scientific director of the Foundational Questions Institute.
